Keyword [SVM]
Biggio B, Nelson B, Laskov P. Poisoning Attacks against Support Vector Machines[J]. 2012.
1. Overview
This paper investigates a family of poisoning attacks against SVMs. Such attacks inject specially crafted training data that increases the SVM's test error.
- uses a gradient ascent strategy (see the sketch after this list)
- the gradient is computed from properties of the SVM's optimal solution
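A minimal sketch of such a gradient-ascent loop, assuming scikit-learn's `SVC`, labels in {-1, +1}, and hypothetical helper names (`validation_hinge_loss`, `poison_point`). The paper derives the gradient analytically from the SVM's optimal solution; this sketch approximates it with finite differences purely for illustration.

```python
# Sketch only: the paper computes the gradient analytically from the SVM's
# optimal solution; here it is approximated by finite differences.
import numpy as np
from sklearn.svm import SVC

def validation_hinge_loss(x_c, y_c, X_tr, y_tr, X_val, y_val, C=1.0):
    """Hinge loss on D_val of an SVM trained on D_tr plus the attack point.
    Labels are assumed to be in {-1, +1}."""
    clf = SVC(kernel="linear", C=C)
    clf.fit(np.vstack([X_tr, x_c]), np.append(y_tr, y_c))
    margins = y_val * clf.decision_function(X_val)
    return np.maximum(0.0, 1.0 - margins).sum()

def poison_point(x_c, y_c, X_tr, y_tr, X_val, y_val,
                 step=0.1, n_iter=50, eps=1e-3):
    """Gradient-ascent update of the attack point x_c."""
    x_c = x_c.copy()
    for _ in range(n_iter):
        base = validation_hinge_loss(x_c, y_c, X_tr, y_tr, X_val, y_val)
        grad = np.zeros_like(x_c)
        for j in range(x_c.size):
            x_pert = x_c.copy()
            x_pert[j] += eps
            grad[j] = (validation_hinge_loss(x_pert, y_c, X_tr, y_tr,
                                             X_val, y_val) - base) / eps
        # move the attack point in the direction that increases the loss
        x_c += step * grad / (np.linalg.norm(grad) + 1e-12)
    return x_c
```

Re-fitting the SVM from scratch after every perturbation is the simplest but most expensive choice; the paper instead exploits how the optimal SVM solution changes as the attack point moves, which makes the gradient available in closed form.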
1.1. Attack Category
1.1.1. Causative
- manipulation of the training data, e.g., a poisoning attack
1.1.2. Exploratory
- exploitation of the trained classifier's weaknesses without influencing its training data
1.2. Discussion
- an attacker usually cannot directly access an existing training database but may provide new training data
- poisoning attacks have been previously studied only for simple anomaly detection methods
- assume that the attacker knows the learning algorithm and can draw data from the underlying data distribution
- further assume that the attacker knows the training data used by the learner
1.3. Overview
- the attacker’s goal is to find a point (x_c, y_c) whose addition to the training data D_tr maximally decreases the SVM’s classification accuracy
- y_c is the label of the attacking class
- the attacker proceeds by drawing a validation data set D_val and maximizing the hinge loss incurred on D_val by the SVM trained on D_tr ∪ {(x_c, y_c)}, as formalized below
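Restating this objective as a formula (using the notation above; f_{x_c} denotes the decision function of the SVM trained on the poisoned set D_tr ∪ {(x_c, y_c)}):

```latex
% Hinge-loss objective maximized by the attacker over the attack point x_c;
% (x_k, y_k) ranges over the validation set D_val.
\max_{x_c} \; L(x_c) \;=\; \sum_{(x_k, y_k) \in D_{\mathrm{val}}} \max\bigl(0,\; 1 - y_k\, f_{x_c}(x_k)\bigr)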